
    Image-Based Visual Servoing Control for Spacecraft Formation Flying

    This paper proposes an image-based visual-servoing algorithm that allows for optimal formation control. The proposed distributed controller utilizes visual features of other team members, retrieved from images captured by onboard cameras, to autonomously plan and perform formation acquisition, keeping, or reconfiguration maneuvers. The problem of minimizing the control effort is analyzed, and the paper proposes an optimal framework for developing controllers that address this issue. The viability of the technique is explored through numerical simulations.
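    The core of an image-based visual-servoing (IBVS) law of the kind this abstract describes can be sketched as follows. The point-feature interaction matrix and the pseudo-inverse control law are the standard textbook forms, not the paper's distributed formulation; feature coordinates, depths, and the gain are illustrative placeholders:

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix for one normalized point
        feature (x, y) at depth Z: maps camera velocity (v, omega) to
        feature velocity in the image."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
        ])

    def ibvs_velocity(features, desired, depths, gain=0.5):
        """Classic IBVS law v = -lambda * L^+ (s - s*): drive the feature
        error to zero exponentially with a pseudo-inverse of the stacked
        interaction matrix."""
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(desired)).ravel()
        return -gain * np.linalg.pinv(L) @ error
    ```

    With four point features the stacked matrix is 8x6, so the camera twist is fully constrained; when the current and desired features coincide, the commanded velocity is zero.
    
    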

    Movement flow-based visual control

    This article describes a new technique for trajectory tracking using image-based control, called movement flow-based visual control. This strategy allows the tracking of trajectories specified in the image space while also performing correctly in 3-D Cartesian space. One of the main contributions of movement flow-based visual control is the possibility of performing the tracking in a time-independent manner, which ensures correct tracking of the trajectory in the image, since the system is not subject to temporal constraints.

    Time independent tracking using 2-D movement flow-based visual servoing

    In this paper, the so-called 2-D movement flow-based visual servoing system is proposed, which allows the tracking of trajectories planned in the image space. This method is time-independent, which is useful when problems such as obstructions occur during the tracking. It allows the tracking of image trajectories while ensuring correct behaviour in the 3-D space and avoiding the limitations of the time-dependent systems used up to now for tracking trajectories. This aspect has allowed its application to manipulation tasks in which the trajectory can be obstructed during the tracking. This work is partially supported by the Spanish MCYT project “DESAURO: Desensamblado Automático Selectivo para Reciclado mediante Robots Cooperativos y Sistema Multisensorial” (DPI2002-02103).
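    The time-independence that this abstract emphasizes can be illustrated with one common pattern: instead of indexing the desired image trajectory by elapsed time, the reference is picked from the trajectory according to where the features currently are, so an obstruction simply pauses progress. This is a minimal sketch of that idea, not the paper's movement-flow formulation; the lookahead parameter is an assumption:

    ```python
    import numpy as np

    def next_reference(current, trajectory, lookahead=1):
        """Time-independent reference selection: find the sample of the
        desired image trajectory nearest to the current feature position
        and advance a fixed number of samples along it, so progress depends
        on position rather than on elapsed time."""
        traj = np.asarray(trajectory, dtype=float)
        dists = np.linalg.norm(traj - np.asarray(current, dtype=float), axis=1)
        nearest = int(np.argmin(dists))
        return traj[min(nearest + lookahead, len(traj) - 1)]
    ```

    If the robot is held up mid-trajectory, the nearest sample stays the same and the reference does not run away, which is exactly the failure mode of time-indexed references.
    
    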

    Movement flow-based visual servoing to track moving objects

    The purpose of this paper is to describe a new method for tracking trajectories specified in the image space. This method, called the movement flow-based visual servoing system, is applied to an eye-in-hand robot, and it is shown that it allows the correct tracking of a trajectory, not only in the image but also in the 3-D space. The method is also extended to the case in which the object from which the features are extracted is in motion. To do so, the estimates obtained using several Kalman filters are integrated into the control action.
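    The Kalman-filter estimation this abstract mentions is typically a constant-velocity filter per image feature, whose velocity estimate is fed forward into the control action. The sketch below shows that standard filter; the state layout, noise levels, and time step are assumptions, not the paper's tuning:

    ```python
    import numpy as np

    class FeatureKalman:
        """Constant-velocity Kalman filter for one image feature
        with state [u, v, du, dv]; the velocity estimate can be fed
        forward to compensate target motion in the control law."""

        def __init__(self, dt=1.0, q=1e-3, r=1e-2):
            self.x = np.zeros(4)
            self.P = np.eye(4)
            self.F = np.eye(4)
            self.F[0, 2] = self.F[1, 3] = dt      # position integrates velocity
            self.H = np.zeros((2, 4))
            self.H[0, 0] = self.H[1, 1] = 1.0     # only position is measured
            self.Q = q * np.eye(4)
            self.R = r * np.eye(2)

        def step(self, z):
            # predict
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # update with the measured feature position z = (u, v)
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2], self.x[2:]          # position and velocity estimates
    ```

    On a feature moving at constant image velocity the estimated velocity converges after a few iterations, which is what makes feed-forward compensation of the target motion possible.
    
    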

    Nonlinear optimal control for the 4-DOF underactuated robotic tower crane

    Tower cranes find wide use in construction works, in ports, and in several loading and unloading procedures encountered in industry. A nonlinear optimal control approach is proposed for the dynamic model of the 4-DOF underactuated tower crane. The dynamic model of the robotic crane undergoes approximate linearization around a temporary operating point that is recomputed at each time-step of the control method. The linearization relies on Taylor series expansion and on the associated Jacobian matrices. For the linearized state-space model of the system, a stabilizing optimal (H-infinity) feedback controller is designed. To compute the controller’s feedback gains, an algebraic Riccati equation is solved repeatedly at each iteration of the control algorithm. The stability properties of the control method are proven through Lyapunov analysis. The proposed control approach is advantageous because: (i) unlike the popular computed torque method for robotic manipulators, the new control approach is characterized by optimality and is also applicable when the number of control inputs is not equal to the robot’s number of DOFs, (ii) it achieves fast and accurate tracking of reference setpoints under minimal energy consumption by the robot’s actuators, and (iii) unlike the popular Nonlinear Model Predictive Control method, the article’s nonlinear optimal control scheme is of proven global stability and convergence to the optimum. This research work has been partially supported by Grant Ref. “CSP contract 040322”—“Nonlinear control, estimation and fault diagnosis for electric power generation and electric traction/propulsion systems” of the Unit of Industrial Automation of the Industrial Systems Institute.
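    The repeated Riccati step at the heart of this scheme can be sketched as follows. For clarity this uses the simpler LQR Riccati equation on a frozen linearization (A, B), rather than the paper's H-infinity variant; the double-integrator example in the usage note is purely illustrative:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def riccati_gain(A, B, Q, R):
        """Solve the continuous-time algebraic Riccati equation for the
        locally linearized model (A, B) and return the state-feedback
        gain K; in the receding scheme this is recomputed at every
        control time-step from fresh Jacobians."""
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)

    # One control step of the overall loop would then be:
    #   A, B = jacobians(x_op)          # Taylor linearization at the operating point
    #   K = riccati_gain(A, B, Q, R)
    #   u = -K @ (x - x_ref)
    ```

    For the double integrator (A = [[0, 1], [0, 0]], B = [[0], [1]]) with Q = I and R = 1, the known closed-form gain is K = [1, sqrt(3)], and the closed-loop matrix A - BK is Hurwitz.
    
    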

    Analysis and adaptation of integration time in PMD camera for visual servoing

    Depth perception of the objects in a scene can be useful for tracking or for applying visual servoing in mobile systems. 3D time-of-flight (ToF) cameras provide range images which give measurements in real time to improve these types of tasks. However, the distance computed from these range images varies significantly with the integration time parameter. This paper presents an analysis for the online adaptation of the integration time of ToF cameras. This online adaptation is necessary in order to capture the images in the best condition irrespective of the changes of distance (between camera and objects) caused by the camera's movement when it is mounted on a robotic arm. This work is supported by the Spanish Ministry of Education and Science (MEC) through the research project DPI2008-02647, “Manipulación Inteligente mediante percepción háptica y control visual empleando una estructura articular ubicada en el robot manipulador”.
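    A simple form of the online adaptation described here is a feedback rule that keeps the measured signal amplitude near a target by scaling the integration time, since amplitude grows roughly in proportion to integration time. This is a hypothetical sketch, not the paper's adaptation law; the target amplitude, gain, and bounds are all assumed values:

    ```python
    def adapt_integration_time(t_int, amplitude, target=1200.0, gain=0.4,
                               t_min=50.0, t_max=5000.0):
        """Hypothetical online rule: nudge the ToF camera integration time
        (microseconds) so the mean signal amplitude tracks a target band,
        clamping to the sensor's valid range. A low amplitude (object moved
        away) raises t_int; saturation (object too close) lowers it."""
        ratio = target / max(amplitude, 1e-6)
        t_new = t_int * (1.0 + gain * (ratio - 1.0))
        return min(max(t_new, t_min), t_max)
    ```

    Called once per frame as the arm moves, the rule lengthens the integration time when the camera retreats from the object and shortens it as the camera approaches, so the range measurements stay in a usable regime.
    
    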

    Ground Extraction from 3D Lidar Point Clouds

    © 2018 IEEE. Pomares, A., Martínez, J.L., Mandow, A., Martínez, M.A., Morán, M., Morales, J. Ground extraction from 3D lidar point clouds with the Classification Learner App (2018) 26th Mediterranean Conference on Control and Automation, Zadar, Croatia, June 2018, pp. 400-405. DOI: pending.
    Ground extraction from three-dimensional (3D) range data is a relevant problem for outdoor navigation of unmanned ground vehicles. Even though this problem has received attention with specific heuristics and segmentation approaches, identification of ground and non-ground points can benefit from state-of-the-art classification methods, such as those included in the Matlab Classification Learner App. This paper proposes a comparative study of the machine learning methods included in this tool in terms of training times as well as their predictive performance. For this purpose, we have combined three suitable features for ground detection, which have been applied to an urban dataset with several labeled 3D point clouds. Most of the analyzed techniques achieve good classification results, but only a few offer low training and prediction times. This work was partially supported by the Spanish project DPI2015-65186-R. The publication has received support from Universidad de Málaga, Campus de Excelencia Andalucía Tech.
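    The per-point classification workflow this abstract describes can be sketched in Python with scikit-learn standing in for the Matlab Classification Learner App. The three features here (height, normal verticality, local roughness) are hypothetical stand-ins for the paper's features, and the data is synthetic:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    def make_points(n, ground):
        """Synthetic per-point features: height above the sensor plane,
        z-component of the local surface normal, and local roughness
        (illustrative stand-ins for the paper's three features)."""
        if ground:
            height = rng.normal(0.0, 0.05, n)
            normal_z = rng.uniform(0.9, 1.0, n)
            rough = rng.uniform(0.0, 0.05, n)
        else:
            height = rng.normal(1.5, 0.8, n)
            normal_z = rng.uniform(0.0, 0.7, n)
            rough = rng.uniform(0.05, 0.5, n)
        return np.column_stack([height, normal_z, rough])

    X = np.vstack([make_points(500, True), make_points(500, False)])
    y = np.array([1] * 500 + [0] * 500)        # 1 = ground, 0 = non-ground
    clf = DecisionTreeClassifier(max_depth=4).fit(X[::2], y[::2])
    accuracy = clf.score(X[1::2], y[1::2])     # held-out half of the points
    ```

    A shallow tree is one of the fast-to-train methods the study compares against heavier ensembles and kernel classifiers; on real lidar data the feature extraction step dominates the pipeline.
    
    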

    Visual control of robot manipulators: a tool for its design and learning

    This article describes a new tool for the simulation and execution of image-based control systems on robot manipulators. On the one hand, this tool allows the various parameters involved in a visual servoing task to be adjusted easily and intuitively; on the other, it can be applied to teaching tasks to facilitate learning about this type of visual control system, which is increasingly widespread and has a growing range of applications. The tool implements classic image-based control algorithms as well as newer moment-based ones that give the system greater flexibility; its educational character also allows new visual servoing algorithms to be easily integrated for evaluation. This work has been partially funded by OMRON following the award of the OMRON prize “Iniciación a la investigación e innovación en automática” in the 2003 call.

    A ROS/Gazebo-based framework for simulation and control of on-orbit robotic systems

    The use of simulation tools such as ROS/Gazebo is currently common practice for testing and developing control algorithms for typical ground-based robotic systems, but it is still not commonly accepted within the space community. Numerous studies in this field use ad hoc tools that are not standardized, not open-source, and sometimes not verified, which complicates, rather than promotes, the development and realization of versatile robotic systems and algorithms for space robotics. This paper proposes an open-source solution for space robotics simulations called OnOrbitROS. The paper presents a description of the architecture, the different software modules, and the simulation possibilities of OnOrbitROS. It shows the key features of the developed tool, with a particular focus on the customization of the simulations and the possibilities for further expansion of the tool. In order to demonstrate these capabilities, a computed torque-based controller for the guidance of a free-floating manipulator is proposed and simulated using the ROS/Gazebo-based framework described in the paper.
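    The computed torque controller demonstrated in that paper follows the standard feedback-linearization pattern below. This is the generic textbook law, not the OnOrbitROS API; the function name and arguments are illustrative, and for a free-floating base the gravity term is typically zero while the inertia matrix includes the base-arm coupling:

    ```python
    import numpy as np

    def computed_torque(M, C, g, q, dq, q_d, dq_d, ddq_d, Kp, Kd):
        """Generic computed-torque law: feedback-linearize the manipulator
        dynamics M(q) ddq + C(q, dq) dq + g(q) = tau and impose linear PD
        error dynamics e'' + Kd e' + Kp e = 0 on the tracking error."""
        e, de = q_d - q, dq_d - dq
        return M @ (ddq_d + Kd @ de + Kp @ e) + C @ dq + g
    ```

    When the model terms are exact, the joints track the reference with linear, decoupled error dynamics; model error from the free-floating base coupling is what the simulation framework lets one study before flight.
    
    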